Full-Text Access Type
Paid full text | 16529 articles |
Free | 3370 articles |
Free (domestic) | 2309 articles |
Subject Category
Electrical engineering | 569 articles |
Technical theory | 4 articles |
General | 2232 articles |
Chemical industry | 180 articles |
Metalworking | 294 articles |
Machinery and instruments | 1253 articles |
Building science | 595 articles |
Mining engineering | 150 articles |
Energy and power | 76 articles |
Light industry | 307 articles |
Water conservancy engineering | 134 articles |
Petroleum and natural gas | 137 articles |
Weapons industry | 123 articles |
Radio electronics | 2260 articles |
General industrial technology | 1218 articles |
Metallurgical industry | 598 articles |
Atomic energy technology | 32 articles |
Automation technology | 12046 articles |
Publication Year
2024 | 112 articles |
2023 | 467 articles |
2022 | 755 articles |
2021 | 780 articles |
2020 | 746 articles |
2019 | 546 articles |
2018 | 471 articles |
2017 | 533 articles |
2016 | 646 articles |
2015 | 684 articles |
2014 | 968 articles |
2013 | 1054 articles |
2012 | 1193 articles |
2011 | 1278 articles |
2010 | 1149 articles |
2009 | 1176 articles |
2008 | 1236 articles |
2007 | 1321 articles |
2006 | 1149 articles |
2005 | 1040 articles |
2004 | 894 articles |
2003 | 793 articles |
2002 | 590 articles |
2001 | 502 articles |
2000 | 409 articles |
1999 | 361 articles |
1998 | 230 articles |
1997 | 142 articles |
1996 | 139 articles |
1995 | 158 articles |
1994 | 121 articles |
1993 | 100 articles |
1992 | 84 articles |
1991 | 67 articles |
1990 | 49 articles |
1989 | 64 articles |
1988 | 46 articles |
1987 | 19 articles |
1986 | 19 articles |
1985 | 26 articles |
1984 | 10 articles |
1983 | 14 articles |
1982 | 5 articles |
1981 | 6 articles |
1980 | 7 articles |
1979 | 7 articles |
1963 | 4 articles |
1960 | 3 articles |
1959 | 3 articles |
1958 | 3 articles |
Sort order: 10000 results found (search time: 22 ms)
1.
Power system maintenance is an important guarantee of stable power system operation, and UAV-based power inspection using intelligent algorithms makes such maintenance more convenient. Power line extraction is a key technology both for autonomous power inspection and for the low-altitude flight safety of aircraft, and combining deep learning with power line extraction is an important breakthrough point for power inspection. This paper applies deep learning to the power line extraction task and, drawing on the characteristics of power line images, embeds an improved image input strategy and attention modules, proposing a power line extraction model based on a stage attention mechanism (SA-Unet). In the encoding stage, SA-Unet adopts a stage input fusion strategy (SIFS) that fully exploits the multi-scale information of the image to reduce the loss of spatial position information. In the decoding stage, an embedded stage attention module (SAM) focuses on power line features and quickly filters high-value information out of a large amount of data. Experimental results show that the method performs well across multiple scenes with complex backgrounds.
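The abstract does not spell out the stage input fusion strategy, but the general idea of supplying downsampled copies of the input image to successive encoder stages can be sketched in pure Python (a minimal illustration; `avg_pool_2x` and `multi_scale_inputs` are hypothetical helper names, not from the paper):

```python
def avg_pool_2x(img):
    """Downsample a 2D grayscale image (list of lists) by 2x average pooling."""
    h, w = len(img), len(img[0])
    return [
        [(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
         for c in range(0, w - 1, 2)]
        for r in range(0, h - 1, 2)
    ]

def multi_scale_inputs(img, stages):
    """Build the image pyramid that a stage-input-fusion encoder would
    concatenate with its feature maps at each successive stage."""
    pyramid = [img]
    for _ in range(stages - 1):
        pyramid.append(avg_pool_2x(pyramid[-1]))
    return pyramid
```

In an actual SA-Unet-style network each pyramid level would be fused with the learned feature maps of the matching encoder stage; here only the pyramid construction is shown.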
2.
Automated entity description generation helps further increase the application value of knowledge graphs, and fluency is one of the key quality indicators of entity description text. This paper proposes generating entity descriptions from multi-hop facts in a knowledge base, so as to approach the writing style of manually compiled entity descriptions and improve fluency. Using an encoder-decoder framework, we propose an end-to-end neural network model that encodes multi-hop facts and represents them in the decoder with an attention mechanism. Experimental results show that, compared with the baseline model, introducing multi-hop facts improves the automatic metrics BLEU-2 and ROUGE-L by about 8.9 and 7.3 percentage points, respectively.
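As a rough illustration of what "multi-hop facts" means here: starting from a target entity, fact paths of increasing length are enumerated from the knowledge base before being fed to the encoder. A minimal sketch (the function name and triple format are assumptions, not from the paper):

```python
def multi_hop_facts(kb, entity, hops):
    """Enumerate fact paths of up to `hops` steps starting at `entity`.
    kb: list of (head, relation, tail) triples."""
    paths = [[(h, r, t)] for (h, r, t) in kb if h == entity]
    result = list(paths)
    for _ in range(hops - 1):
        extended = []
        for path in paths:
            tail = path[-1][2]  # follow the tail entity of the last hop
            for (h, r, t) in kb:
                if h == tail:
                    extended.append(path + [(h, r, t)])
        result.extend(extended)
        paths = extended
    return result
```

Each returned path is one candidate multi-hop fact; the paper's model would encode these paths and attend over them while decoding the description.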
3.
To address the over-segmentation problem of the classical watershed image segmentation algorithm, a color image segmentation algorithm combining bitmap cutting and region merging is proposed. The gradient image of the original color image is computed with a spatial gradient operator, and the gradient image is reconstructed via bitmap cutting; watershed pre-segmentation is then applied to the new gradient image; finally, the pre-segmented regions are merged based on the principle of minimal heterogeneity to obtain the final segmentation result. Compared with existing methods of the same kind, the introduction of bitmap cutting suppresses the influence of noise on the segmentation result, segments accurately at blurred edges, yields a smaller number of segmented regions consistent with human vision, and improves running efficiency.
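The region-merging step based on minimal heterogeneity can be illustrated with a greedy sketch that repeatedly merges the adjacent pair of regions whose mean values differ least (a toy stand-in; the paper's actual heterogeneity criterion for color images is not specified in the abstract):

```python
def heterogeneity(a, b):
    """Toy heterogeneity measure: absolute difference of region mean values."""
    return abs(a[0] - b[0])

def merge_regions(regions, adjacency, target_count):
    """Greedily merge the adjacent region pair with minimal heterogeneity
    until `target_count` regions remain.
    regions: {region_id: (mean_value, pixel_count)}
    adjacency: set of frozenset({id_a, id_b}) pairs."""
    regions = dict(regions)
    adjacency = set(adjacency)
    while len(regions) > target_count and adjacency:
        pair = min(adjacency,
                   key=lambda p: heterogeneity(*(regions[i] for i in p)))
        a, b = sorted(pair)
        (ma, na), (mb, nb) = regions[a], regions[b]
        regions[a] = ((ma * na + mb * nb) / (na + nb), na + nb)  # area-weighted mean
        del regions[b]
        # Rewire b's neighbours to a; drop the merged pair and self-loops.
        adjacency = {frozenset(a if i == b else i for i in p)
                     for p in adjacency if p != pair}
        adjacency = {p for p in adjacency if len(p) == 2}
    return regions
```

Running this on watershed pre-segmentation output reduces the over-segmented label map to a small number of perceptually meaningful regions.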
4.
Computer-Interpretable Guidelines (CIGs) are the dominant medium for the delivery of clinical decision support, given the evidence-based nature of their source material. These machine-readable versions can therefore improve practitioner performance and conformance to standards, being available at the point and time of care. Formalising Clinical Practice Guideline knowledge in a machine-readable format is a crucial task in making it suitable for integration into Clinical Decision Support Systems. However, current tools for this purpose reveal shortcomings in their ease of use and in the support offered during CIG acquisition and editing. In this work, we characterise the current landscape of CIG acquisition tools in terms of guideline visualisation, organisation, simplicity, automation, manipulation of knowledge elements, and guideline storage and dissemination. Additionally, we describe the CompGuide Editor, a tool for the acquisition of CIGs in the CompGuide model for Clinical Practice Guidelines that also allows the editing of previously encoded guidelines. The Editor guides users through the process of guideline encoding and does not require proficiency in any programming language. The features of the CIG encoding process are demonstrated through a comparison with established CIG acquisition tools.
5.
To address the poor rationality and weak noise robustness of existing picture fuzzy clustering algorithms, a robust picture fuzzy clustering algorithm with an embedded symmetric regularization term is proposed. The neutrality degree and refusal degree associated with sample clustering are combined to construct a symmetric regularization term, which is embedded into the objective function of existing picture fuzzy clustering. At the same time, the mean information of each pixel's neighborhood is used to assist the clustering of the current pixel, constructing a spatial-information-constrained regularization term, and the Lagrange multiplier method yields the regularized robust picture fuzzy clustering segmentation algorithm. Segmentation results on images corrupted by different kinds of noise show that the proposed algorithm is effective and suppresses noise more strongly than existing robust fuzzy clustering segmentation algorithms.
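For context, the membership update of standard fuzzy c-means, onto which a symmetric regularizer and a spatial-information term of the kind described would be added, looks as follows (a baseline sketch only; the paper's regularized update is not given in the abstract):

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership degrees of a scalar sample x for each
    cluster center, with fuzzifier m (memberships sum to 1)."""
    d = [abs(x - c) + 1e-12 for c in centers]  # epsilon avoids division by zero
    u = []
    for i in range(len(centers)):
        s = sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(len(centers)))
        u.append(1.0 / s)
    return u
```

In the spatially regularized variant, `x` would effectively be pulled toward the mean of its pixel neighborhood before (or while) memberships are computed, which is what gives the algorithm its noise robustness.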
6.
7.
Numerical simulation techniques such as Finite Element Analysis are essential in today's engineering design practice. However, comprehensive knowledge is required to set up reliable simulations that verify strength and other product properties. Due to limited capacity, design-accompanying simulations are performed too rarely by experienced simulation engineers. As a result, product models are not sufficiently verified, or the simulations lead to wrong design decisions when applied by less experienced users. This results in belated redesigns of already detailed product models and in highly cost- and time-intensive iterations in product development. Thus, to support less experienced simulation users in setting up reliable Finite Element Analyses, a novel ontology-based approach is presented. The knowledge management tools developed on the basis of this approach allow automated acquisition and target-oriented provision of the necessary simulation knowledge. This knowledge is acquired from existing simulation models and from text-based documentation of previous product developments via text and data mining. By supporting less experienced simulation users, the presented approach may ultimately lead to a more efficient and extensive application of reliable FEA in product development.
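The text-mining step can be pictured, in a highly simplified form, as extracting frequent domain terms from simulation documentation for later mapping onto ontology concepts (a naive term-frequency sketch; the paper's actual pipeline is more sophisticated and the function name is invented):

```python
from collections import Counter

def mine_keywords(docs, stopwords, top_k=3):
    """Naive term-frequency mining of candidate simulation-knowledge terms
    from text documentation, ignoring a given stopword set."""
    counts = Counter()
    for doc in docs:
        for tok in doc.lower().split():
            tok = tok.strip(".,;:()")
            if tok and tok not in stopwords:
                counts[tok] += 1
    return [word for word, _ in counts.most_common(top_k)]
```

In an ontology-based system, each mined term would then be linked to a class or property of the simulation ontology rather than stored as a flat keyword list.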
8.
9.
Shape segmentation from point cloud data is a core step of the digital twinning process for industrial facilities. However, it is also a very labor-intensive step, which counteracts the perceived value of the resulting model. The state-of-the-art method for automating cylinder detection can detect cylinders with 62% precision and 70% recall, while other shapes must then be segmented manually, so full shape segmentation is not achieved. This performance is promising, but it is far from drastically eliminating the manual labor cost. We argue that class segmentation deep learning algorithms have the theoretical potential to perform better in terms of per-point accuracy while requiring less manual segmentation time. However, such algorithms could not be used so far due to the lack of a pre-trained dataset of laser-scanned industrial shapes, as well as the lack of appropriate geometric features for learning these shapes. In this paper, we tackle both problems in three steps. First, we parse the industrial point cloud through a novel class segmentation solution (CLOI-NET) that consists of an optimized PointNET++-based deep learning network and post-processing algorithms that enforce stronger contextual relationships per point. We then allow the user to choose the optimal manual annotation of a test facility by means of active learning to further improve the results. We achieve the first step by clustering points into meaningful spatial 3D windows based on their location. We then apply a class segmentation deep network, output a probability distribution over all label categories per point, and improve the predicted labels by enforcing post-processing rules. We finally optimize the results by finding the optimal amount of data to be used for training experiments.
We validate our method on the largest richly annotated dataset of the most important industrial shapes to model (CLOI) and yield 82% average per-point accuracy, 95.6% average AUC across all classes, and an estimated 70% labor-hour savings in class segmentation. This proves that our method is the first to automatically segment industrial point cloud shapes with no prior knowledge at commercially viable performance, and it lays the foundation for efficient industrial shape modeling in cluttered point clouds.
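The first step, clustering points into meaningful spatial 3D windows by location, can be sketched as simple grid binning (an illustrative stand-in; the actual window shape and size used by CLOI-NET may differ):

```python
import math

def grid_windows(points, window_size):
    """Group 3D points into axis-aligned cubic windows by location.
    points: iterable of (x, y, z) tuples; returns {window_key: [points]}."""
    windows = {}
    for p in points:
        key = tuple(math.floor(c / window_size) for c in p)
        windows.setdefault(key, []).append(p)
    return windows
```

Each window's points would then be passed independently to the class segmentation network, keeping the per-inference point count bounded regardless of facility size.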
10.
To convert textual descriptions of the system fault evolution process (SFEP) into a space fault network (SFN) structure for use in fault analysis, this paper proposes a method for extracting causal relations from SFEP text, together with a method for converting them into basic SFN structures. Several typical causal relations between events in an SFEP are first presented, followed by a workflow for converting causal relations into basic SFN structures. The method is built around keywords and causal-relation group patterns, which are continually supplemented and enriched as the model learns, ultimately giving the method the ability to convert SFEP text into SFN structures. Applied to text describing the fault process of an aircraft landing gear, the experimental results show that the method can be used for causal relation analysis in SFEP text and yields a satisfactory SFN. Complete sets of keywords and group patterns facilitate intelligent computer processing of an SFEP's SFN.
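The keyword-and-pattern approach to causal extraction can be pictured with a toy example using English cue patterns (the paper works on Chinese SFEP text with learned group patterns; these regexes are purely illustrative):

```python
import re

# Hypothetical causal cue patterns; the paper's actual keywords and group
# patterns are learned from SFEP text and are not listed in the abstract.
CAUSAL_PATTERNS = [
    re.compile(r"(?P<cause>.+?) leads to (?P<effect>.+)"),
    re.compile(r"because (?P<cause>.+?), (?P<effect>.+)"),
]

def extract_causal_pairs(sentence):
    """Return (cause, effect) pairs matched by the cue patterns."""
    pairs = []
    for pattern in CAUSAL_PATTERNS:
        m = pattern.search(sentence)
        if m:
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs
```

Each extracted (cause, effect) pair would map onto a basic SFN structure, with the cause as an upstream event node and the effect as its downstream consequence.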